This notebook shows how to use 3DeeCellTracker to track cells in single mode.
The basic procedure: please run the following code cells in order, according to the instructions in each step.
%load_ext autoreload
%autoreload 2
import os
import warnings
warnings.filterwarnings('ignore')
from IPython.core.display import display, HTML
from matplotlib.animation import FuncAnimation, ArtistAnimation
from CellTracker.tracker import Tracker, save_tracker, load_tracker
display(HTML("<style>.container { width:95% !important; }</style>"))
%matplotlib inline
Using TensorFlow backend.
Image parameters
Segmentation parameters
Tracking parameters
Paths
tracker = Tracker(
volume_num=50, siz_xyz=(512, 1024, 21), z_xy_ratio=9.2, z_scaling=10,
noise_level=20, min_size=100, beta_tk=300, lambda_tk=0.1, maxiter_tk=20,
folder_path=os.path.abspath("./worm1"), image_name="aligned_t%03i_z%03i.tif",
unet_model_file="unet3_pretrained.h5", ffn_model_file="ffn_pretrained.h5")
Create folders to store data, model, and results
tracker.make_folders()
Following folders were made under: /home/wen/PycharmProjects/3DeeCellTracker
worm1/data
worm1/auto_vol1
worm1/manual_vol1
worm1/track_information
worm1/models
worm1/unet
worm1/track_results_SingleMode
worm1/anim
worm1/models/unet_weights
Prepare images
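Before segmenting, it can help to confirm that the raw images match the naming pattern passed to the Tracker. The sketch below is not part of the 3DeeCellTracker API; the folder "worm1/data" and the 1-based t/z indices are assumptions based on the make_folders() output above.

```python
import os

# Build the file names the tracker will look for, using the same pattern
# passed as image_name ("aligned_t%03i_z%03i.tif") and the image parameters
# (volume_num=50, 21 z-layers).
folder = "./worm1/data"
volume_num, z_num = 50, 21

expected = ["aligned_t%03i_z%03i.tif" % (t, z)
            for t in range(1, volume_num + 1)
            for z in range(1, z_num + 1)]
print(expected[0])  # -> aligned_t001_z001.tif

# Report any files that are not present in the data folder.
missing = [name for name in expected
           if not os.path.exists(os.path.join(folder, name))]
print("%d of %d files missing" % (len(missing), len(expected)))
```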
Modify the segmentation parameters (optional)
tracker.set_segmentation(noise_level=20, min_size=100)
Segmentation parameters were not modified
Segment cells at volume 1
tracker.load_unet()
tracker.segment_vol1()
Loaded the 3D U-Net model
Load images with shape: (512, 1024, 21)
Segmented volume 1 and saved it
Draw the results of segmentation (Max projection)
anim_seg = tracker.draw_segresult(percentile_high=99.8)
Segmentation results (max projection):
Show segmentation in each layer
HTML(anim_seg)
Manual correction
Move files to the folder
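A minimal sketch (not part of the tracker API) of moving the manually corrected label images into the "manual_vol1" folder created by make_folders(), so that load_manual_seg() can find them. The source folder name and the *.tif extension are assumptions; use whatever your annotation tool produced.

```python
import glob
import os
import shutil

# Assumed location of the manually corrected segmentation files.
src_dir = "./my_corrections"
# Destination folder created by tracker.make_folders().
dst_dir = "./worm1/manual_vol1"

os.makedirs(dst_dir, exist_ok=True)
for path in sorted(glob.glob(os.path.join(src_dir, "*.tif"))):
    shutil.copy(path, dst_dir)
    print("copied", os.path.basename(path))
```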
Load the manually corrected segmentation
tracker.load_manual_seg()
Loaded manual _segment at vol 1
Re-train the U-Net using the manual segmentation (optional)
tracker.retrain_unet()
Images were normalized
Images were divided
Data for training 3D U-Net were prepared
WARNING:tensorflow:From /home/wen/anaconda3/envs/3DCT/lib/python3.7/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From /home/wen/anaconda3/envs/3DCT/lib/python3.7/site-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
144/144 [==============================] - 9s 61ms/step
val_loss before retraining: 0.017810788574732013
Epoch 1/1
60/60 [==============================] - 72s 1s/step - loss: 0.0183 - val_loss: 0.0316
Epoch 1/1
60/60 [==============================] - 68s 1s/step - loss: 0.0146 - val_loss: 0.0178
val_loss updated from 0.017810788574732013 to [0.017790769653705258]
Epoch 1/1
60/60 [==============================] - 69s 1s/step - loss: 0.0133 - val_loss: 0.0283
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0133 - val_loss: 0.0219
Epoch 1/1
60/60 [==============================] - 68s 1s/step - loss: 0.0125 - val_loss: 0.0346
Epoch 1/1
60/60 [==============================] - 68s 1s/step - loss: 0.0133 - val_loss: 0.0186
Epoch 1/1
60/60 [==============================] - 68s 1s/step - loss: 0.0114 - val_loss: 0.0180
Epoch 1/1
60/60 [==============================] - 69s 1s/step - loss: 0.0127 - val_loss: 0.0166
val_loss updated from [0.017790769653705258] to [0.016596044879002472]
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0125 - val_loss: 0.0213
Epoch 1/1
60/60 [==============================] - 69s 1s/step - loss: 0.0147 - val_loss: 0.0178
tracker._select_weights(step=8)
tracker.set_segmentation(noise_level=20, min_size=100, reset_=True)
tracker.segment_vol1()
anim_seg = tracker.draw_segresult(percentile_high=99.8)
Segmentation parameters were not modified
All files under /unet folder were deleted
Load images with shape: (512, 1024, 21)
Segmented volume 1 and saved it
Segmentation results (max projection):
Interpolate cells to make the cell boundaries more accurate and smooth
tracker.interpolate_seg()
tracker.draw_manual_seg1()
Interpolating... cell:164
Initiate variables required for tracking
tracker.cal_subregions()
tracker.check_multicells()
tracker.load_ffn()
tracker.initiate_tracking()
Calculating subregions... cell: 164
Checked mistakes of multiple cells as one: Correct!
Loaded the FFN model
Initiated coordinates for tracking (from vol 1)
Modify the tracking parameters if the test result is unsatisfactory (optional)
tracker.set_tracking(beta_tk=300, lambda_tk=0.1, maxiter_tk=20)
Tracking parameters were not modified
Test the matching between volume 1 and a target volume, and show the FFN + PR-GLS process as an animation (5 iterations)
anim_tracking, results = tracker.match(target_volume=50)
HTML(anim_tracking)
Matching between vol 1 and vol 50 was computed
Show the accurate correction after the FFN + PR-GLS transformation
tracker.draw_correction(*results[2:])
Show the superimposed cells + labels before/after tracking
tracker.draw_overlapping(*results[:3])
Track and show the processes
%matplotlib notebook
from_volume = 2
ax, fig = tracker.track_animation(from_volume)
tracker.track_and_confirm(from_volume, fig, ax)
Show the processes as an animation (for diagnosis)
%matplotlib inline
track_anim = tracker.track_animation_replay(from_volume=2)
HTML(track_anim)